Evaluation of Parallel Computing on MPI Version PHITS Code
Authors
Abstract
The Message Passing Interface (MPI) technique is a long-standing solution to the enormous computational time of the Monte Carlo N-Particle Transport (MCNP) method, but it had not been evaluated for the PHITS code, a more recently developed simulation code. We conducted simulations using Varian Clinac iX 6 MV phase-space data from the IAEA. The criteria of Venselaar et al. were used to validate the simulation. The PC cluster was also tested in terms of processor count and bch, which stands for the unit of calculation per operation. The speedup factor of the MPI version of the code and the K-factor, which represents the serial portion of the cluster, were both evaluated. All calculated criteria were met except δ2, the high-dose, large-gradient set of the beam profile. It was clear that clusters performed better than single nodes, by up to 70.6%. Additionally, the speedup shows a tendency to follow Amdahl's law. At the same time, the K-factor saturated beyond a certain measure. The study concludes by arguing that the limitations come from the cluster's composition; if improvements to hardware specifications are considered, this system could be more effective.
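The abstract's appeal to Amdahl's law can be made concrete with a short sketch. The function below computes the ideal speedup predicted by Amdahl's law for a given serial fraction (the quantity the abstract calls the K-factor) and processor count; the function and variable names are illustrative assumptions, not taken from the paper.

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Ideal speedup under Amdahl's law.

    serial_fraction: portion of the run that cannot be parallelized
                     (the serial portion the abstract calls the K-factor).
    n_procs: number of processors in the cluster.
    """
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a small serial portion makes the speedup saturate as processors are added:
for n in (1, 2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

For a serial fraction of 0.05 the speedup can never exceed 1/0.05 = 20 regardless of processor count, which illustrates the saturation behaviour the abstract describes.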
Similar resources
Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.
Parallel computing is a topic of interest for a broad scientific community, since it facilitates many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing using MPI and OpenMP based on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...
Performance Evaluation of Parallel Applications using MPI in Cluster Based Parallel Computing Architecture
Parallel computing operates on the principle that large problems can often be divided into smaller ones, which are then solved concurrently to save time (wall-clock time) by taking advantage of non-local resources and overcoming memory constraints. The main aim is to form a cluster-based parallel computing architecture for MPI-based applications which demonstrates the performance gain and losse...
Parallel Computing of Kernel Density Estimates with MPI
Kernel density estimation is nowadays a very popular tool for nonparametric probability density estimation. One of its most important disadvantages is the computational complexity of the calculations needed, especially for data-based bandwidth selection and adaptation of the bandwidth coefficient. The article presents parallel methods which can significantly improve calculation time. Results of using refer...
Journal
Journal title: Applied Sciences
Year: 2023
ISSN: 2076-3417
DOI: https://doi.org/10.3390/app13063782